Stress majorization
Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of ''n'' ''m''-dimensional data items, a configuration ''X'' of ''n'' points in ''r''-dimensional space (''r'' ≪ ''m'') is sought that minimizes the so-called ''stress'' function \sigma(X). Usually ''r'' is 2 or 3, i.e. the ''(n'' x ''r)'' matrix ''X'' lists points in 2- or 3-dimensional Euclidean space so that the result may be visualised (i.e. an MDS plot). The function \sigma is a cost or loss function that measures the squared differences between ideal (''m''-dimensional) distances and actual distances in ''r''-dimensional space. It is defined as:
: \sigma(X)=\sum_{i<j\le n}w_{ij}(d_{ij}(X)-\delta_{ij})^2
where w_{ij}\ge 0 is a weight for the measurement between a pair of points (i,j), d_{ij}(X) is the Euclidean distance between i and j and \delta_{ij} is the ideal distance between the points (their separation) in the ''m''-dimensional data space. Note that w_{ij} can be used to specify a degree of confidence in the similarity between points (e.g. 0 can be specified if there is no information for a particular pair).
A configuration X which minimizes \sigma(X) gives a plot in which points that are close together correspond to points that are also close together in the original m-dimensional data space.
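The stress function above can be computed directly. The following is a minimal sketch assuming NumPy; the names (stress, X, delta, w) are illustrative, not from a particular library:

```python
# A minimal sketch of the weighted stress function, assuming NumPy.
# Points are stored as rows of X (an n x r array); delta and w are
# n x n arrays of ideal distances and non-negative weights.
import numpy as np

def stress(X, delta, w):
    """sum over i < j of w_ij * (d_ij(X) - delta_ij)^2."""
    # Pairwise Euclidean distances d_ij(X) in the r-dimensional space.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.triu_indices(len(X), k=1)  # index pairs with i < j
    return float(np.sum(w[i, j] * (d[i, j] - delta[i, j]) ** 2))
```

For example, two points at distance 5 with ideal separation 5 give zero stress, while ideal separation 6 gives stress (5 − 6)² = 1.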
There are many ways that \sigma(X) could be minimized. For example, Kruskal recommended an iterative steepest descent approach. However, a significantly better (in terms of guarantees on, and rate of, convergence) method for minimizing stress was introduced by Jan de Leeuw. De Leeuw's ''iterative majorization'' method at each step minimizes a simple convex function which both bounds \sigma from above and touches the surface of \sigma at a point Z, called the ''supporting point''. In convex analysis such a function is called a ''majorizing'' function. This iterative majorization process is also referred to as the SMACOF algorithm ("Scaling by majorizing a complicated function").
== The SMACOF algorithm ==
The stress function \sigma can be expanded as follows:
:
\sigma(X)=\sum_{i<j\le n}w_{ij}(d_{ij}(X)-\delta_{ij})^2
=\sum_{i<j}w_{ij}\delta_{ij}^2 + \sum_{i<j}w_{ij}d_{ij}^2(X)-2\sum_{i<j}w_{ij}\delta_{ij}d_{ij}(X)

Note that the first term is a constant C and the second term is quadratic in X (i.e. for the Hessian matrix V the second term is equivalent to \operatorname{tr}\, X'VX) and therefore relatively easily solved. The third term is bounded by:
:
\sum_{i<j}w_{ij}\delta_{ij}d_{ij}(X)=\operatorname{tr}\, X'B(X)X \ge \operatorname{tr}\, X'B(Z)Z

where B(Z) has:
: b_{ij}=-\frac{w_{ij}\delta_{ij}}{d_{ij}(Z)} for d_{ij}(Z)\ne 0, i \ne j
and b_{ij}=0 for d_{ij}(Z)=0, i\ne j
and b_{ii}=-\sum_{j=1, j\ne i}^n b_{ij}.
Proof of this inequality is by the Cauchy–Schwarz inequality; see Borg (pp. 152–153).
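The entries of B(Z) translate directly into code. Below is a sketch assuming NumPy (names illustrative), with a numerical check that at X = Z the bound holds with equality, i.e. \sum_{i<j}w_{ij}\delta_{ij}d_{ij}(Z)=\operatorname{tr}\, Z'B(Z)Z:

```python
# A sketch of building B(Z) from the entries defined above, assuming NumPy.
import numpy as np

def B_of(Z, delta, w, eps=1e-12):
    """b_ij = -w_ij delta_ij / d_ij(Z) for d_ij(Z) != 0 (i != j),
    b_ij = 0 for d_ij(Z) = 0 (i != j), b_ii = -sum_{j != i} b_ij."""
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    B = np.zeros_like(d)
    mask = d > eps                        # pairs with d_ij(Z) != 0
    B[mask] = -(w * delta)[mask] / d[mask]
    np.fill_diagonal(B, 0.0)              # handle i = j separately
    np.fill_diagonal(B, -B.sum(axis=1))   # b_ii = -sum_{j != i} b_ij
    return B
```

Because B(Z) has the structure of a (negated) weighted Laplacian, \operatorname{tr}\, Z'B(Z)Z expands to exactly the weighted sum of distances at Z.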
Thus, we have a simple quadratic function \tau(X,Z) that majorizes stress:
: \sigma(X)=C+\operatorname{tr}\, X'VX - 2 \,\operatorname{tr}\, X'B(X)X

: \le C+\operatorname{tr}\, X' V X - 2 \,\operatorname{tr}\, X'B(Z)Z = \tau(X,Z)
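The majorization property can be checked numerically: \tau(X,Z) should bound \sigma(X) from above for any X and touch it at X = Z. A sketch assuming NumPy, with V taken as the weighted graph Laplacian of w (all names illustrative):

```python
# Sanity check of the majorization: tau(X, Z) >= sigma(X), with equality
# at X = Z. V is the weighted Laplacian of w; names are illustrative.
import numpy as np

def pair_dists(P):
    return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

def sigma(X, delta, w):
    d = pair_dists(X)
    i, j = np.triu_indices(len(X), k=1)
    return float(np.sum(w[i, j] * (d[i, j] - delta[i, j]) ** 2))

def tau(X, Z, delta, w):
    n = len(X)
    i, j = np.triu_indices(n, k=1)
    C = float(np.sum(w[i, j] * delta[i, j] ** 2))  # constant first term
    V = -np.array(w, dtype=float)                  # weighted Laplacian of w
    np.fill_diagonal(V, 0.0)
    np.fill_diagonal(V, -V.sum(axis=1))
    dZ = pair_dists(Z)
    B = np.zeros_like(dZ)                          # B(Z) as defined above
    mask = dZ > 1e-12
    B[mask] = -(w * delta)[mask] / dZ[mask]
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))
    return C + float(np.trace(X.T @ V @ X)) - 2.0 * float(np.trace(X.T @ B @ Z))

rng = np.random.default_rng(0)
n, r = 6, 2
Z = rng.standard_normal((n, r))
X = rng.standard_normal((n, r))
A = rng.random((n, n))
delta = A + A.T
np.fill_diagonal(delta, 0.0)
w = np.ones((n, n))
np.fill_diagonal(w, 0.0)
```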

The iterative minimization procedure is then:
* at the ''k''th step we set Z\leftarrow X^{k-1}
* X^k\leftarrow \min_X \tau(X,Z); this minimum is attained in closed form by the Guttman transform X^k = V^+B(Z)Z, where V^+ is the Moore–Penrose pseudoinverse of V
* stop if \sigma(X^{k-1})-\sigma(X^k)<\epsilon, otherwise repeat.
This algorithm has been shown to decrease stress monotonically (see de Leeuw).
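The procedure above can be sketched end to end as follows, assuming NumPy and using the Guttman transform X \leftarrow V^+B(Z)Z as the per-step minimizer of \tau (function and parameter names are illustrative):

```python
# A minimal end-to-end sketch of the SMACOF iteration, assuming NumPy.
import numpy as np

def smacof(delta, w, r=2, eps=1e-6, max_iter=300, seed=0):
    """Return (X, history), where history lists the stress per iteration."""
    n = len(delta)
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, r))        # random starting configuration

    def dists(P):
        return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

    def stress(P):
        d = dists(P)
        i, j = np.triu_indices(n, k=1)
        return float(np.sum(w[i, j] * (d[i, j] - delta[i, j]) ** 2))

    # V (weighted Laplacian of w) is constant, so pseudo-invert it once.
    V = -np.array(w, dtype=float)
    np.fill_diagonal(V, 0.0)
    np.fill_diagonal(V, -V.sum(axis=1))
    V_pinv = np.linalg.pinv(V)

    history = [stress(X)]
    for _ in range(max_iter):
        Z = X                              # supporting point
        d = dists(Z)
        B = np.zeros_like(d)               # B(Z) as defined above
        mask = d > 1e-12
        B[mask] = -(w * delta)[mask] / d[mask]
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))
        X = V_pinv @ B @ Z                 # Guttman transform
        history.append(stress(X))
        if history[-2] - history[-1] < eps:
            break
    return X, history
```

By the majorization argument, the recorded stress values are non-increasing, which makes monotone decrease easy to verify empirically.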

Excerpt source: Wikipedia, the free encyclopedia — "stress majorization".